Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet S. Talwalkar
Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method). MAPLE has two fundamental advantages over existing interpretability systems. First, while it is effective as a black-box explanation system, MAPLE itself is a highly accurate predictive model that provides faithful self explanations, and thus sidesteps the typical accuracy-interpretability trade-off. Specifically, we demonstrate, on several UCI datasets, that MAPLE is at least as accurate as random forests and that it produces more faithful local explanations than LIME, a popular interpretability system. Second, MAPLE provides both example-based and local explanations and can detect global patterns, which allows it to diagnose limitations in its local explanations.
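The core idea described above — using random-forest leaf co-membership as a supervised neighborhood, then fitting a weighted linear model whose coefficients serve as the local explanation — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name `maple_style_explanation` and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Toy regression data: y depends linearly on x0, nonlinearly on x1, not at all on x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def maple_style_explanation(x):
    # Leaf indices for all training points and for the query point, per tree.
    train_leaves = forest.apply(X)                   # shape (n_train, n_trees)
    query_leaves = forest.apply(x.reshape(1, -1))    # shape (1, n_trees)
    # Supervised neighborhood: weight each training point by the fraction of
    # trees in which it falls into the same leaf as the query point.
    weights = (train_leaves == query_leaves).mean(axis=1)
    # Fit a weighted linear model around x; its coefficients are the local explanation.
    local_model = Ridge(alpha=1e-3).fit(X, y, sample_weight=weights)
    return local_model.coef_, local_model.predict(x.reshape(1, -1))[0]

coefs, pred = maple_style_explanation(np.zeros(3))
```

Because the explanation is itself the predictive model's local fit, the same coefficients that explain the prediction also produce it, which is the sense in which such a model provides "faithful self explanations."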
Learning Global Transparent Models Consistent with Local Contrastive Explanations
In these methods, for an input, an explanation is in the form of a contrast point differing in very few features from the original input and lying in a different class. Other works try to build globally interpretable models like decision trees and rule lists based on the data using actual labels or based on the black-box model's predictions.
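A contrast point in the sense described above can be illustrated with a naive search: among training points the model assigns to a different class, pick the one that differs from the input in the fewest features, breaking ties by distance. This is a hedged sketch of the general idea, not any specific paper's method; `contrast_point` and the threshold `tol` are assumed names and parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy binary classification data: the label depends only on x0 + x1.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

def contrast_point(x, tol=0.5):
    """Return a training point predicted as a different class that
    differs from x in as few features as possible."""
    pred = clf.predict(x.reshape(1, -1))[0]
    candidates = X[clf.predict(X) != pred]
    # Count features differing by more than tol; prefer sparse contrasts,
    # then break ties by Euclidean distance.
    diffs = (np.abs(candidates - x) > tol).sum(axis=1)
    dists = np.linalg.norm(candidates - x, axis=1)
    order = np.lexsort((dists, diffs))  # primary key: diffs; secondary: dists
    return candidates[order[0]]

x = np.array([1.0, 1.0, 0.0, 0.0])
cp = contrast_point(x)
```

Restricting the search to actual training points keeps the contrast realistic, at the cost of possibly missing sparser synthetic contrasts that optimization-based methods can find.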